Similar resources
On logarithmic derivatives
© Bulletin de la S. M. F., 1968, all rights reserved. Access to the archives of the journal "Bulletin de la S. M. F." (http://smf.emath.fr/Publications/Bulletin/Presentation.html) implies agreement with the general conditions of use (http://www.numdam.org/legal.php). Any commercial use or systematic printing constitutes a criminal offence. Any copy or print...
On the Growth of Logarithmic Differences, Difference Quotients and Logarithmic Derivatives of Meromorphic Functions
Abstract. We obtain growth comparison results for logarithmic differences, difference quotients and logarithmic derivatives of finite order meromorphic functions. Our results are both generalizations and extensions of previous results. We construct examples showing that the results obtained are best possible in a certain sense. Our findings show that there are marked differences between the growt...
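As a quick numerical illustration of the comparison the abstract describes (my own sketch, not an example from the paper): for the finite-order entire function f(x) = exp(x²), the logarithmic derivative f′(x)/f(x) and the logarithmic difference log f(x+1) − log f(x) grow at the same rate as x → ∞.

```python
import math

# Hypothetical illustration: for f(x) = exp(x^2), both quantities below
# have closed forms, so the comparison can be checked directly.

def log_derivative(x):
    # f'(x)/f(x) = 2x for f(x) = exp(x^2)
    return 2.0 * x

def log_difference(x):
    # log f(x+1) - log f(x) = (x+1)^2 - x^2 = 2x + 1
    return (x + 1.0) ** 2 - x ** 2

# The ratio tends to 1, i.e. the two growth rates agree asymptotically.
for x in [1.0, 10.0, 100.0]:
    ratio = log_difference(x) / log_derivative(x)
    print(x, log_derivative(x), log_difference(x), ratio)
```

The trend of the ratio toward 1 is the elementary analogue of the kind of comparison the paper makes for general finite order meromorphic functions.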
Logarithmic Derivatives of Solutions to Linear Differential Equations
Given an ordinary differential field K of characteristic zero, it is known that if y and 1/y satisfy linear differential equations with coefficients in K, then y'/y is algebraic over K. We present a new short proof of this fact using Gröbner basis techniques and give a direct method for finding a polynomial over K that y'/y satisfies. Moreover, we provide explicit degree bounds and extend the res...
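A minimal worked instance of the statement above (my own illustration, not taken from the paper), over the differential field K = C(x):

```latex
% Take y = e^{x^2}. Then y' = 2xy and (1/y)' = -2x \cdot (1/y), so both y and
% 1/y satisfy first-order linear ODEs with coefficients in K = \mathbb{C}(x),
% and the logarithmic derivative lands in K itself:
\[
  y = e^{x^{2}}, \qquad
  y' - 2xy = 0, \qquad
  \Bigl(\tfrac{1}{y}\Bigr)' + 2x \cdot \tfrac{1}{y} = 0, \qquad
  \frac{y'}{y} = 2x \in K .
\]
% Here y'/y is trivially algebraic over K: it satisfies P(T) = T - 2x in K[T],
% a degree-one instance of the polynomial the paper's method computes.
```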
Quaternionic Gamma Functions and Their Logarithmic Derivatives as Spectral Functions
We establish Connes’s local trace formula (related to the explicit formulae of number theory) for the quaternions. This is done as an application of a study of the central operator H = log(|x|) + log(|y|) in the context of invariant harmonic analysis. The multiplicative analysis of the additive Fourier transform gives a spectral interpretation to generalized “Tate Gamma functions” (closely akin...
Derivatives of Logarithmic Stationary Distributions for Policy Gradient Reinforcement Learning
Most conventional policy gradient reinforcement learning (PGRL) algorithms neglect (or do not explicitly make use of) a term in the average reward gradient with respect to the policy parameter. That term involves the derivative of the stationary state distribution that corresponds to the sensitivity of its distribution to changes in the policy parameter. Although the bias introduced by this omi...
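The term the abstract refers to can be made concrete on a toy example (my own sketch; the two-state chain, rewards, and parameterization below are hypothetical, not from the paper): differentiating the average reward through the stationary distribution, and checking the result against finite differences.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

Q = 0.5            # fixed transition probability from state 1 to state 0
R = [0.0, 1.0]     # per-state rewards (hypothetical)

def stationary(theta):
    # Two-state chain with transition matrix [[1-p, p], [q, 1-q]],
    # p = sigmoid(theta); its stationary distribution is (q, p)/(p+q).
    p = sigmoid(theta)
    z = p + Q
    return [Q / z, p / z]

def avg_reward(theta):
    d = stationary(theta)
    return d[0] * R[0] + d[1] * R[1]

def grad_avg_reward(theta):
    # Analytic gradient flowing through the stationary distribution:
    # d1 = p/(p+Q), so d(d1)/dtheta = Q * p' / (p+Q)^2 with p' = p(1-p).
    p = sigmoid(theta)
    dp = p * (1.0 - p)
    return (R[1] - R[0]) * Q * dp / (p + Q) ** 2

theta, eps = 0.3, 1e-6
fd = (avg_reward(theta + eps) - avg_reward(theta - eps)) / (2 * eps)
print(grad_avg_reward(theta), fd)  # the two gradients agree
```

Neglecting `grad_avg_reward`'s stationary-distribution factor is precisely the kind of omission the paper analyzes; the finite-difference check shows the term is a genuine part of the average-reward gradient.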
Journal
Journal title: Bulletin de la Société mathématique de France
Year: 1968
ISSN: 0037-9484, 2102-622X
DOI: 10.24033/bsmf.1659